Computational Media

[RTSS] Interaction

Web sketch: LINK

One interaction on the web that I found compelling is the WebGL example for lens flare.

I found the open-world space navigation very fluid and intuitive, and it matches the feeling of flying well. To navigate the space, you only need the mouse, which controls both the direction you're looking and whether you move forward. To stay still, keep your mouse in the center. To change the direction of the camera, push the mouse toward the direction you'd like to pan; the further the mouse is from the center, the faster the camera pans. Clicking the mouse moves the camera forward so that you can travel through the space.

I tried implementing this interaction in my previous week's homework assignment. I did this by removing the first-person controls and creating my own camera movement. I had a mouse tracker (ev.clientX and ev.clientY) that noted where the mouse was in relation to the center of the canvas. Then, depending on whether the mouse was to the left, right, top, or bottom, the camera would rotate in that direction. Next, I tried to push the camera forward whenever the mouse was held down. I had a bit of trouble with this: the camera was only pushed once per mouse click, but I wanted the push to be continuous while the mouse was held down.
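In p5 terms, the fix for the once-per-click problem is to do the movement in draw(), which runs every frame, rather than in a mousePressed() handler that fires once. A rough sketch of the idea (the names stepCamera and moveSpeed are placeholders, not from my actual code):

```javascript
// Placeholder speed constant; tune to taste.
const moveSpeed = 2;

// Pure helper: advance a position along a direction vector while the mouse is held.
function stepCamera(pos, dir, mouseHeld) {
  if (!mouseHeld) return pos; // no click, no movement
  return {
    x: pos.x + dir.x * moveSpeed,
    y: pos.y + dir.y * moveSpeed,
    z: pos.z + dir.z * moveSpeed,
  };
}

// p5-style glue: draw() runs every frame, so checking mouseIsPressed here
// gives continuous motion for as long as the button stays down.
let camPos = { x: 0, y: 0, z: 0 };
function draw() {
  camPos = stepCamera(camPos, { x: 0, y: 0, z: -1 }, mouseIsPressed);
}
```

The same per-frame pattern works in three.js by moving the camera inside the render loop instead of the event handler.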

Additionally, when I translated this interaction to my bathhouse, I realized that this interaction wasn’t as suitable as it was for the space-like flying movement in the example I found. The anti-gravity, flowy movement didn’t match as well with my box-shaped bathhouse, where the user would expect a more rigid walking movement.

[Storytelling] Retelling a Story

Helen Lin - Storytelling for Project Development - Spring 2025

Brief

For my retelling, which I've renamed dorm room, I would like to rework an interactive storytelling project that I built with my friends (JZ on code, Kathleen on art direction, Sarah and I on illustration, and Sav Du on music) in my sophomore year of undergraduate study.

“dorm room”, previously called R o o m, takes you through a college dorm room in four different states (one for each season) and encourages you to poke around, gleaning details about what kind of person the student is through their belongings and how those belongings change over time.

Original Version

This original storyboard has a linear story structure, taking the viewer through the room in four different states (one for each season). It starts with the beginning of the semester, unpacking boxes, through winter and spring, and then to the end of the semester, packing the belongings back up. 

The original is more of an animated comic strip, but I want to make it more interactive, with the storytelling elements outlined more clearly.

We built this for a hackathon so we were working on a tight deadline and rushed some components. Initially, we wanted this character to be a lot messier, so in the new rendition, I want to exaggerate this aspect. 

Reworked Version

I want to rework this piece to have a stronger story by fleshing out more literal glimpses into the owner of the room (who I will call JC). For the new version, I'd like to create a more meandering, spiral story structure by creating more access points for learning about different aspects of JC's life.

Firstly, I mocked up some of the storytelling elements that I wanted to incorporate by mapping out the elements and adding text boxes to narrate for each object. I mostly wanted to focus on some repeating aspects of JC’s life, mainly the physics class, playing lacrosse, and a brief romance with a girl named Justine. 

Mockup: dorm room mockup

I want the audience's role to be to visit (1) or spectate (2).

To technically execute this new version, I wanted to practice what I learned last semester in ICM by rebuilding the interactions using p5. I organized my files on my local computer and set up my local working environment in VSCode.

It took quite a bit of time to find the exact position where I wanted to place each object, so for the time being, I'm using the exact pixel (x, y) coordinates for each. To quickly estimate each point, I put a mouseX and mouseY tracker in the top left.
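A minimal version of that tracker is just a live readout drawn in the corner every frame (the helper coordLabel is my own name for the formatting):

```javascript
// Pure helper: format a coordinate pair for display.
function coordLabel(x, y) {
  return `(${x}, ${y})`;
}

function draw() {
  // ...draw the room image first...
  fill(0);
  textSize(12);
  // Hover over an object, read off the numbers, hard-code them.
  text(coordLabel(mouseX, mouseY), 10, 20);
}
```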


Next, I drafted thought-bubble text for each clickable item. These texts only show up when a specific region (mapped to the position of the object) is clicked. I reused the code for detecting a new mouse click from a previous sketch I worked on. Whenever there was a newClick(), the code would display or hide a text box when applicable. The text is different for each clickable object. It was also important to me to change the cursor to a hand when hovering over a clickable object, to make it intuitive to use.
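The core of that click-to-toggle behavior can be sketched like this, under my own placeholder names (regions, hitRegion, handleClick): each clickable object gets a rectangular region, and a fresh click inside it toggles that object's text box.

```javascript
// Placeholder region data; the real sketch uses hand-measured coordinates.
const regions = [
  { x: 120, y: 200, w: 60, h: 40, text: "JC's physics notes...", visible: false },
  { x: 300, y: 150, w: 50, h: 80, text: "a lacrosse stick", visible: false },
];

// Pure helper: is a point inside a region's rectangle?
function hitRegion(mx, my, r) {
  return mx >= r.x && mx <= r.x + r.w && my >= r.y && my <= r.y + r.h;
}

// Called once per fresh click (the role newClick() played in my sketch).
function handleClick(mx, my, rs) {
  for (const r of rs) {
    if (hitRegion(mx, my, r)) r.visible = !r.visible; // show or hide the text box
  }
  return rs;
}
```

The hand cursor then falls out of the same helper: if any region passes hitRegion for the current mouse position, set cursor(HAND), otherwise cursor(ARROW).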

One of the challenges I faced was getting the animated GIF to play on each new hover without needing to reload each time. I haven't had the time to research and figure this out, but it would improve the user experience. For the time being, the GIF only plays once while actively on the page, and resets once the page is changed.
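One idea to try later: p5 loads animated GIFs as p5.Image objects, and recent p5 versions give those a reset() method that rewinds the animation to the first frame. Calling it on the frame the hover *begins* (an edge, not a level) should replay the GIF without a page reload. The names wasHovering and hoverJustStarted are mine:

```javascript
let wasHovering = false;

// Pure helper: true only on the frame hover switches from off to on.
function hoverJustStarted(isHovering, wasHoveringBefore) {
  return isHovering && !wasHoveringBefore;
}

let gif; // loaded in preload() with loadImage('assets/anim.gif') (path is a placeholder)
function draw() {
  const over = mouseX > 100 && mouseX < 200 && mouseY > 100 && mouseY < 200;
  if (hoverJustStarted(over, wasHovering)) gif.reset(); // replay from frame 0
  wasHovering = over;
  image(gif, 100, 100);
}
```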

For this iteration (since I was reusing illustrated assets), a lot of the creative freedom this time went into writing the text. I was able to support my meandering story structure by choosing what to highlight in the text. 

All in all, it helped to push myself to throw a prototype together very quickly while focusing on the aspects of interactive storytelling I have the least experience with (writing, coding, interaction design).

[RTSS] Light and Shadows

Link to sketch: LINK

Inspiration image from Library of Congress

I appreciate the ease with which I can navigate the space in newart.city exhibitions (leymusoom being one of my favorites, shared by my friend Vinh). The UI feels intuitive and familiar. The tutorial page is succinct and placed at a location that feels both non-disruptive and easy to find. I feel excited by the potential of this tool and how it can make 3D environment building more accessible and experimental. I'm especially excited by moments where elements of the space can be exaggerated and abstracted. I'm also fascinated by the idea of walking through wall text as if it were a 3D space. One thing I did notice was how important responsive touch is for simulating the feeling of my body / brain / eyes walking through the space itself. Otherwise, the lag made me feel detached from the avatar and I didn't feel “present”. It's possible my internet connection wasn't fast enough to render the details of the models.

The artworks listed in Coco Sui's The Third Space looked interesting, but I couldn't make it through the front door.

Shashank Satish's “Digital (Dis)embodiments” made me think a lot about how text can be navigated through 3D space. How does the way it's laid out in this additional dimension change the way we read it?

[RTSS] Public Spaces on the Internet

Link to sketch: LINK

Blog: Write a post on your blog about a (physical) public space you use personally. This might be your local park or green space, a library reading room, a cafe in your neighborhood, or something else entirely. Why do you spend your time in this space rather than any other?  What do you like about it?  How do you engage with others (if at all) within this space?

The public space that I use most frequently and prominently is the MTA subway. There's a reason why so much media and content about the space is produced by the people who live here. The people who use it are often required to use it at a high frequency to carry out their daily tasks. It has such recognizable sounds (the screeching of the train accelerating, the rumbling against the tracks, "stand clear of the closing doors", muffled train conductor announcements) and recognizable sights (faces buried into their phones, flashing lights from the windows underground, salt and stains on the floor, littered cups of coffee rolling around). Perhaps because of this overstimulating environment, most people tend to retreat inwards, avoiding as much interaction with each other as possible. Since I got noise-cancelling earphones, my subway experience has improved drastically. It's a public place, yet my daily commutes feel like a very private time to me. Here, I often sit, put my earbuds in, pull up my mask, and escape into my mind to hasten the experience of the ride. It's a place that is a necessity for me to be in when I use it, but it's also become sentimental. I've done homework, cried, scarfed down quick meals, written journal entries, crafted gifts, napped, watched videos, and gotten to know people on the train. Despite the lack of comfort, hygiene, and predictability, it's become an important place where people have done so much living and growing up.

[ICM] Webpage Final Project

Link to project: LINK

For my final project, I wanted to build a functioning website where I can browse through all the discarded textiles that have been acquired as materials in my studio. For each item acquired, I wanted to show its different states, starting with just three visual representations (front side, back side, and the goodbye letter from the person who donated the item). I wanted the layout to change on every click, and it was important for this website to be scalable (since it's likely that I'll have many more textiles added to the fabric stew over time) and responsive to the size of the browser. This work ended up using the DOM, HTML, and CSS more than previous assignments did.

Since I was working with a lot of images (120 minimum), I quickly realized I had to move off the p5 web editor and preview the interactions on my webpage from a local server. Lucia from The Coding Lab quickly introduced me to VSCode and an easy way to go live / preview the webpage (via the “Go Live” button on the lower right). I downloaded my files from the p5 web editor as a package and set up my station on my local computer accordingly.

Next, to create a responsive webpage, I used flexbox to organize all the buttons (each button triggers image loading) and made the JavaScript code constantly check whether the window has been resized. The initial setup creates a p5 canvas matching the size of the window and resizes the p5 canvas whenever the window changes. Then, I made sure that later, when I was setting the positions for each image, they would fall within the range of the most recently sized canvas.
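In p5, the resize handling can be this small, since the library calls windowResized() for you whenever the browser window changes size (the clamping helper at the end is my own addition, not from the original sketch):

```javascript
function setup() {
  createCanvas(windowWidth, windowHeight); // canvas follows the window
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight); // p5 calls this on every resize
}

// Pure helper (placeholder name): keep a randomly placed image fully inside
// the current canvas, whatever size it has been resized to.
function clampToCanvas(x, y, imgW, imgH, canvasW, canvasH) {
  return {
    x: Math.min(Math.max(x, 0), canvasW - imgW),
    y: Math.min(Math.max(y, 0), canvasH - imgH),
  };
}
```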

Next, I wanted to make it so that every time one of the buttons were clicked, the images could layer on top of each other and create new original compositions every time. The positions and sizes for each image would be randomly generated with each button click. I set the back photo to always be partially transparent and the blend mode for the goodbye letter to multiply. That way, it could create interesting overlays and textures upon each button click as well.

Coding the image handling and positioning was the most challenging part of this project. I didn't want the browser to load too many images at once, and my teacher Allison helped me a lot in problem-solving this code. With my current model, the sketch only loads a number's corresponding images (just three) into an array when prompted by a button click. The draw() loop renders the images over and over again. Finally, when there is a new button click, the entire image stack is released and a new series of images is pushed in. Initially, I wanted to create DOM images instead of p5 images, so that I could have an isolated sketch just for holding the 3D object for each button click. However, I had trouble switching my code from p5 image objects to HTML elements. This is something I want to look into in the future. At some point, I want to include more context on this webpage, maybe an about page or such, and refine the layout design.
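The load-on-click model can be sketched like this: the old stack is dropped before the clicked item's three images are loaded with p5's asynchronous loadImage() callback. The function names and the asset naming scheme here are placeholders, not my actual file layout:

```javascript
let currentImages = [];

// Pure helper (naming scheme assumed): the three asset paths for item n.
function imagePathsFor(n) {
  return [
    `assets/${n}-front.jpg`,
    `assets/${n}-back.jpg`,
    `assets/${n}-letter.jpg`,
  ];
}

// Called from a button's click handler.
function loadItemImages(n) {
  currentImages = []; // release the previous stack so it can be garbage-collected
  for (const path of imagePathsFor(n)) {
    loadImage(path, (img) => currentImages.push(img)); // async success callback
  }
}

function draw() {
  // Render whatever has finished loading so far.
  for (const img of currentImages) image(img, 0, 0);
}
```

This keeps at most three images resident at a time, which is what stops the browser from choking on all 120+.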

[ICM] Media: Sound

Link to sketch: LINK

For this project, I wanted to play around with music visualization and manipulation. It’s more likely that I’ll be working with premade .mp3 files in the future rather than create my own music / sound piece, so I decided to practice using p5.FFT and loadSound() rather than experiment with p5.Oscillator.

Firstly, I imported all the sounds I'd be using in preload(). I loaded 2 amazing dance tracks from two fave artists that I listened to on repeat during late nights in 2019, and 4 instrumental sound bites I found royalty-free online.

Next, I created two sliders and two buttons. The first slider changes the volume, the second slider changes the song speed, the play button plays/pauses the song, and the change-song button cycles between the two tracks. In the future, this project could be expanded by adding more songs to the song[] array. I could take it one step further by making each song an object that contains String variables for song title, artist, duration, etc. Then, I could make a music-player display so that the user of this interface can see what tracks are available to cycle through, along with the information for each.
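Roughly, that control setup looks like the following in p5 / p5.sound; the slider ranges are my guesses, not the values from my sketch. setVolume() and rate() take the slider values directly, and cycling is just a modulo over the array:

```javascript
let songs = []; // filled in preload() with loadSound(...) calls
let songIndex = 0;
let volSlider, rateSlider;

function setup() {
  volSlider = createSlider(0, 1, 0.5, 0.01);   // volume 0..1 (assumed range)
  rateSlider = createSlider(0.5, 2, 1, 0.01);  // playback speed 0.5x..2x (assumed range)
  const changeBtn = createButton('change song');
  changeBtn.mousePressed(() => {
    songIndex = nextSongIndex(songIndex, songs.length);
  });
}

function draw() {
  songs[songIndex].setVolume(volSlider.value());
  songs[songIndex].rate(rateSlider.value());
}

// Pure helper: wrap around the playlist, however many songs it holds.
function nextSongIndex(i, total) {
  return (i + 1) % total;
}
```

Because nextSongIndex wraps with a modulo, adding more tracks to the array is all the "expansion" the cycling button needs.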

To switch between button states, I adapted the togglePlaying() code that Pedro from my Physical Computation class used in his p5 serial communication sketch. It was an easy way to show functionality as well as song state through the button.

Additionally, I used the keyTyped() code from Allison Parrish’s sound example for triggering events based on keyboard presses. When the user presses the characters ‘a’, ‘d’, ‘s’, or ‘f’, the character’s respective percussion sound plays. Ideally, the user would play around and trigger the drum sounds in time with the music playing (kind of like shaking the tambourine during karaoke).

Next, I wanted to create a visualizer for the song. I've always wanted to try out 3D graphics in the browser. We didn't have time to go into it within the scope of ICM, but I still wanted to try it out a bit. I created a 3D cone and enabled orbitControl() so the 3D space can be spun around using the trackpad. I didn't like the look of adding lights(), so I decided to keep it stylized without shading. However, this meant that at first glance the user wouldn't be able to tell the space was 3D, so I added a rotation animation to the 3D space.

Then, I used p5.FFT to analyze the song’s sound and create a visual from it. I used the FFT bars example from class and adapted it to create an abstracted spiral using the rectangles. It’s harder to decipher, but does create an interesting visual. I’d like to play around more with the visualizer graphics in the future.
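Stripped down, the spiral adaptation works like this: fft.analyze() returns an array of amplitudes (0-255), and instead of placing the bars side by side, each bin gets an increasing angle and radius. The constants in spiralPoint are placeholders, not my actual tuning:

```javascript
// Pure helper: map spectrum bin i to a point on an outward-winding spiral.
function spiralPoint(i, amplitude, cx, cy) {
  const angle = i * 0.25;      // winds around as i grows (placeholder step)
  const radius = 20 + i * 1.5; // and slowly moves outward (placeholder growth)
  return {
    x: cx + radius * Math.cos(angle),
    y: cy + radius * Math.sin(angle),
    h: amplitude / 4,          // bar height scaled down to fit
  };
}

let fft; // created in setup() with new p5.FFT()
function draw() {
  const spectrum = fft.analyze(); // amplitudes per frequency bin, 0..255
  for (let i = 0; i < spectrum.length; i++) {
    const p = spiralPoint(i, spectrum[i], width / 2, height / 2);
    rect(p.x, p.y, 3, p.h);
  }
}
```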

Here it is in motion~

[ICM] Week 6: Arrays and Objects (Catch the Roach)

Link to sketch: LINK

I used my interactive cloud code from last week’s assignment as a base for instances of an object moving left to right. For this assignment, I replaced the clouds with roaches and turned it into a clicker game. When we first moved into my new (current) Astoria apartment about a couple months ago, we had a bit of a roach problem. One of my roommates is very squeamish around roaches and this game was inspired by him. Usually if he ran into one, I would go in and either chase it or clean up the corpse.

I used pixel graphics found through an online search, which were imported using preload() from an assets folder. When a new roach is “born”, its width, height, type (there are 3 different breeds of roaches in this game), and speed are randomly generated, though within specified ranges.

 

For this game, I had two different arrays storing information: Bugs[], which kept track of the bug objects on the screen, and typesOfBugs[], which stored the image data for the three bug graphics. Bugs[] was iterated over every draw() cycle. Each bug on screen would draw itself, move to its next x and y position, remove itself if it had gone past the screen, and check for mouse interaction. typesOfBugs[] was much simpler: it was only used so that the roach image could be picked with a randomly generated index (e.g. floor(random(0, 3)) for a three-image array).

For the mouse interaction, checkClicked() is a Bug class method that checks whether the mouse click was within the bounds of the bug's image. I added moE (margin of error) to make the game more playable; otherwise, it'd feel like the user was clicking the bug but not getting the squish. The higher the moE, the easier the game. Next, I added a points system so that the smallest bugs gave the most points, and small-ish or fast bugs gave slightly higher points than the big, slow ones.
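The hit test with a margin of error amounts to growing the bug's image rectangle by moE pixels on every side before checking the click, so near-misses still count as a squish. A standalone sketch of it (the moE value here is a placeholder):

```javascript
const moE = 8; // margin of error in pixels; bigger = easier game

// Is the click inside the bug's rectangle, expanded by moE on every side?
function checkClicked(mx, my, bug) {
  return (
    mx >= bug.x - moE &&
    mx <= bug.x + bug.w + moE &&
    my >= bug.y - moE &&
    my <= bug.y + bug.h + moE
  );
}
```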

Lastly, I added a health bar to create some stakes. Whenever a roach passed through the canvas without being clicked, the health bar would deplete. When fully depleted, the health bar and points would reset. The game still keeps track of total bugs squished though, just for pure player satisfaction.

[ICM] Week 5: Functions (Pet the Fish revamped)

Link to sketch: LINK

I’ll likely keep iterating upon this “pet the fish” scene as I continue to gain new skills and knowledge of different interactive components. This time, I wanted to try out the fish and hand with rasterized images. I used preload() and image() to import and place the images.

Next, I used a class constructor to make the cloud interaction more organized. Instead of a standalone series of moving ellipses, each cloud has its own properties and placement. The x, y, w, h are self-explanatory, but ppf stands for pixels per frame (indicating a different movement speed for each cloud). Whenever createNewCloud() is called, it generates properties within a set range and pushes a new cloud onto the clouds array. I separated these into two different functions so it would be easier to change the range of cloud sizes and positions without having to change the Cloud constructor / class itself.

I created an array of clouds (clouds[]) which is set to always have ~10 clouds. In setup(), a for loop pushes 10 clouds into the array, and then within draw(), a for loop iterates through clouds[]. This was the most challenging part of the code to work through so far. I wanted a cloud to remove itself from the array once it had fully passed the screen. At first, I computed clouds.length-1 inside the for loop, but it kept generating way too many clouds. I think it's because clouds.length was continually being updated as I added/removed from the array. Once I took that calculation out to be done before entering the loop, it solved my issue.
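In isolated form, the usual idiom for this remove-while-iterating problem is to walk the array backwards, so splicing never shifts an index you haven't visited yet. A sketch under my own names (removeOffscreenClouds):

```javascript
// Remove every cloud whose trailing edge has passed the right side of the canvas.
// Iterating backwards makes splice() safe: only already-visited indices shift.
function removeOffscreenClouds(clouds, canvasWidth) {
  for (let i = clouds.length - 1; i >= 0; i--) {
    if (clouds[i].x - clouds[i].w / 2 > canvasWidth) {
      clouds.splice(i, 1);
    }
  }
  return clouds;
}
```

Caching clouds.length before the loop, as I did, fixes the same symptom; the backwards walk just avoids the stale-length question entirely.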

Lastly, I added a click counter to gamify the interaction. I want to keep working on the visuals for this, for sure. This project is yet to be fully realized. I want to bring back the fish-petting mechanism / interaction, but I was only able to complete the clouds so far.

[ICM] Repetition with loops

Link to sketch: LINK

For Week 5's assignment, I wanted to practice some new functions in p5 that I've seen in classmates' projects (modular functions, sliders, preloading images, etc.). This ended up shaping my end result more than any creative idea itself. A lot of the coding work I did this round was unplanned and done through experimentation. It was fun to work this way for a change, but I do think the end result would be stronger with more concept work and planning.

I grabbed a portion of this code from my previous retro tv assignment.

When trying to create a repeating design using for loops, I found it more challenging than expected. It was the math that was difficult, more so than finding the corresponding code. I racked my brain for anything I could remember from high school Calculus BC… but my brain was empty. I have officially lost that knowledge. The distance code was a math calculation I had forgotten, which I found in a p5 sample online. I experimented with using distance to create an interactive design, but didn't end up integrating it into my final work. Instead, I played with sliders.

I was drawn to how the design looked rotated. However, it was rotating around (0,0). I wanted to center the design so it would create a spiral around the middle. I did this using push() and pop() with a translate() to the center. I also loaded a background space image using preload().

The slider would change the scale of the spiral as it rotates. I thought it would be interesting to have it so that every time you clicked, a star would be left behind as part of the night sky. This would also make good use of my newClick() function.

Hooray for math calculations found online done by someone who is not me. If I didn’t find this code online, I likely would have settled with a circle or a quad.

It was a bit tricky to get the starStack array to work. I've done nested arrays before, but not in JavaScript, and I wasn't sure exactly what properties and methods arrays have in JavaScript. I kept a clickCounter to keep track of how long the array was. It took a bit of time to make sure the array was never accessed when empty and that the loop referencing the array's components never referred to a null object. Once it worked out, it worked out well!

One of many end results:

[ICM] Rule-based Animation and User Interfaces

Link to sketch: LINK

For this assignment, I worked with my friend Khalinda, who came up with the idea of a tv screen and remote that we could use to turn an animation on and off with a button. Khalinda prepared the drawing using a generous amount of translate(), push(), and pop() to allow for easy rotation without affecting all the other shapes being drawn.

Next, I changed up the colors and started working on the button interactivity. I created some global variables that I anticipated I would need.

We had a separate playAnimation() function to organize our code a bit. playAnimation() would only be called if the tv was on. It would draw an X shape of spinning squares centered in the middle of the tv screen. We wrapped the drawing in push() and pop() so the transformations used to position and spin the squares wouldn't affect the rest of the sketch.

Then I added another interactive yellow button that only works if the tv is already on. While the animation is playing, if the yellow button is pressed, it would slow down the spin (by reducing the rotateConstant).

Lastly, I added a repeating grid of squares to snazz up the design. The nested for loops helped me draw a square every 40 pixels. I had forgotten to set the stroke and fill when I first ran this code, but I actually liked it better that way. It was cool to see the size and color change slightly as the buttons were being pressed and the animations played.
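Reduced to its core, the grid is two nested loops stepping by the cell size. I've factored the cell corners into a pure helper here (gridCorners is my name, not from our sketch) so the spacing is easy to check:

```javascript
const cell = 40; // draw a square every 40 pixels

// Pure helper: every top-left corner of a cell-by-cell grid covering w x h.
function gridCorners(w, h, step) {
  const corners = [];
  for (let x = 0; x < w; x += step) {
    for (let y = 0; y < h; y += step) {
      corners.push({ x, y });
    }
  }
  return corners;
}

function draw() {
  for (const c of gridCorners(width, height, cell)) {
    square(c.x, c.y, cell); // no fill/stroke set, so the happy accident stays
  }
}
```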

[ICM] Animation and Variables

I started my animation with clouds moving from left to right. I used a variable cloudXStart to keep track of the X coordinate of the first cloud. After the clouds had fully run their cycle (roughly 2.5 times the total width), I would reset cloudXStart back to -50. The reason I set it to -50 instead of 0 was that I wanted the center of the cloud to begin before the left border and slowly move into the visible frame.

// Reset clouds to move again if it's been a while

if (cloudXStart > width * 2.5) cloudXStart = -50;

At first, I tried to draw a cloud using the beginShape() function, but the points kept ending up in spots I didn’t intend for them to be in, so I ended up using a series of ellipses to construct each cloud instead.

Picture of hand drawing work in progress.

To get a random hand color every time the program resets, I created variables that would be declared using the random() function inside setup().

//generate random hand color

handColorR = random(0,255);
handColorG = random(0,255);
handColorB = random(0,255);

Then, I would draw out the actual shapes in the hand with

fill(handColorR, handColorG, handColorB);

At first, I tried to rotate the thumb shape of the hand after drawing out the (rounded-corner) rectangle shape. However, it kept rotating the thumb around the point (0,0). I wasn't sure how to fix it, so I just left the thumb flat and horizontal without rotating for now.
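For future me: the usual fix for "everything rotates around (0,0)" in p5 is to move the origin to the pivot first with translate(), then rotate, all inside a push()/pop() pair. The thumb dimensions below are placeholders; the pure rotateAround helper is my own, included so the math can be checked:

```javascript
// Draw a rounded-corner thumb rotated around its joint, not around (0,0).
function drawThumb(pivotX, pivotY, angle) {
  push();
  translate(pivotX, pivotY); // origin moves to the thumb's joint
  rotate(angle);             // now rotation happens around that joint
  rect(0, -10, 60, 20, 8);   // placeholder thumb shape, drawn from the new origin
  pop();                     // everything after this is unaffected
}

// Pure version of the same math: rotate point (px, py) around pivot (cx, cy).
function rotateAround(px, py, cx, cy, angle) {
  const dx = px - cx, dy = py - cy;
  return {
    x: cx + dx * Math.cos(angle) - dy * Math.sin(angle),
    y: cy + dx * Math.sin(angle) + dy * Math.cos(angle),
  };
}
```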

Next, I created a blue sea rectangle and a fish swimming inside it. The fish's color is the opposite of the hand's color (so it's randomized each time as well). At first, I set the fish's expression to be black, but realized it was difficult to see when the fish was randomly generated in a darker color. Because of this, I set the expression to white if the fish's R, G, and B numbers were collectively below a certain point.

// Make features white if the color of the fish is dark
// (the fish color is the inverse of the hand color, so a light hand means a dark fish)
let fishDarkColor = false;
if (handColorR > 100 && handColorG > 100 && handColorB > 100) {
  fishDarkColor = true;
}

For the mouse interaction, if you click and hold the mouse, the fish will smile. It is recommended to click and drag the mouse over the fish’s body back and forth, as if you were petting it.

[ICM] Screen Drawing

I want to use computation to build fictional environments that people can explore, learn about themselves in, and use to reflect on the physical world around them. I want to learn how we can build immersive worlds digitally that can be translated into the physical, and the reverse. How can we change our relationship to technology to one of care and humanity? How do people attempt to place sentimentality into emotionless objects? Is there a way to democratize the digital web? I want to build beautiful worlds such as Taehee Wang’s Queer UV Map diagram, where they use the website to simulate navigation into their body, mortality, and fear. I’m inspired by net artists such as Olia Lialina and Lorna Mills, who use play and humor to inform their work, and Yeseul Song’s Thought Sculpture III, which invites participants to reflect on their senses as they experience digital forms. I love Laurel Schwulst’s philosophy behind creating a handmade web, and am oddly fascinated by broken websites and broken links: they highlight how the technology (often misinterpreted as a neutral third party) is imperfect too, just like the humans who build it.

In my first prompt, to draw a self portrait, I first envisioned how I could simplify a depiction of myself into shapes. Would it be a QR code that leads to a link that’s significant to me? Should I re-draw an illustration I’ve done in the past, re-interpreting it into a series of numbers and functions? I decided to draw my home desk set-up using the same colors in this past illustration of a desk I’ve done. I’m often drawing and re-drawing my desk numerous times, in different mediums and different desk setups for each apartment I live in. It’s the space I see most frequently every day, the place I truly inhabit. This past August, I moved apartments (the shocking distance from one apartment in Astoria, Queens to another apartment in Astoria just a 6-minute walk away). I figured this would be the perfect opportunity to cement the ephemeral changing space of a desk set-up into yet another drawing. 

(Illustration initially drawn on Procreate, collaged with photo elements.)

As I started coding, I started to separate the parts of my illustration into basic shapes and identified the key colors that made up my illustration.  First, I drew parts of the desk, and then I drew the parts of the monitors. I based all my approximations on the (0, 0) coordinate being top left and (width, height) coordinate being bottom right.

I wanted to try making my drawing scalable, so instead of using exact points, I would approximate the locations using width and height variables. I noticed that my division started to get a little messy haha. I was placing my coordinates all based on eyeballing the placement and proportions.

    //draw desk
    strokeWeight(0);
    fill(179, 148, 229);
    quad(width/5, 2*height/3, 4*width/5, 2*height/3, 4*width/5 + width/8, 4*height/5, width/5 - width/8, 4*height/5);
    fill(135, 89, 212);
    rect(width/5 - width/8, 4*height/5, (4*width/5 + width/8) - (width/5 - width/8), height/15);

I tried to draw out a grid behind the monitors using a while() loop and for() loop, but both made my code crash. I gave up on automating a grid or dot pattern after that. I’m hoping to get answers on how to automate grid drawing next time in class.

    //tried using loop to draw grid, made it crash
    //(in hindsight: i++ sits outside the while body, so i never changes and the loop never ends)
    while (i < 10) {
      line(i*width/10, 0, i*width/10, height);
    }
    i++;
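The crash was almost certainly an infinite loop: with i++ outside the while body, i never changes, so the condition never turns false and the browser freezes. A fixed version, with the line positions pulled into a helper (gridLineXs is my name) so the spacing can be verified:

```javascript
// Compute the x positions of evenly spaced vertical grid lines.
function gridLineXs(w, divisions) {
  const xs = [];
  let i = 0;
  while (i < divisions) {
    xs.push((i * w) / divisions);
    i++; // inside the loop body this time, so the loop terminates
  }
  return xs;
}

function drawGrid() {
  for (const x of gridLineXs(width, 10)) {
    line(x, 0, x, height); // one vertical line per division
  }
}
```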

Then, I tried to make it so that the colors in the computer screens would change over time. I set r, g, and b as variables declared at the beginning of the code (I placed them outside of the functions and it seemed to work there…). Each time draw() was called, the r, g, and b variables would either go up by one or go down by one. I also declared upr, upg, and upb as booleans to make sure the r, g, b values stayed within bounds.

    //set r in fill color of monitors
    if (upr == true) {
      r++;
      if (r >= 255) upr = false;
    } else if (upr == false) {
      r--;
      if (r <= 55) upr = true;
    }

Then, I added a circle in each of the monitor screens so that the drawing made a :| funny face.

Next, I fiddled with the background color. I landed upon a pale sky blue.

The web editor was quite intuitive to use. I wasn't sure yet how GitHub would be incorporated into my process with the p5 web editor. I would also like to host my p5 project independently on a site outside the web editor someday. If I had more time, I'd like to map the coordinates for the rest of the shapes as well; it was more time-consuming than I expected to draw out each shape.